Kolmogorov equations

From Wikipedia, the free encyclopedia

In probability theory, Kolmogorov equations, including Kolmogorov forward equations and Kolmogorov backward equations, characterize continuous-time Markov processes. In particular, they describe how the probability of a continuous-time Markov process in a certain state changes over time.

Diffusion processes vs. jump processes

Writing in 1931, Andrei Kolmogorov started from the theory of discrete-time Markov processes, which are described by the Chapman–Kolmogorov equation, and sought to derive a theory of continuous-time Markov processes by extending this equation. He found that there are two kinds of continuous-time Markov processes, depending on the assumed behavior over small intervals of time:

If you assume that "in a small time interval there is an overwhelming probability that the state will remain unchanged; however, if it changes, the change may be radical",[1] then you are led to what are called jump processes.

The other case leads to processes such as those "represented by diffusion and by Brownian motion; there it is certain that some change will occur in any time interval, however small; only, here it is certain that the changes during small time intervals will be also small".[1]

For each of these two kinds of processes, Kolmogorov derived a forward and a backward system of equations (four in all).

History

The equations are named after Andrei Kolmogorov since they were highlighted in his 1931 foundational work.[2]

William Feller, in 1949, used the names "forward equation" and "backward equation" for his more general version of Kolmogorov's pair, in both jump and diffusion processes.[1] Much later, in 1956, he referred to the equations for the jump process as "Kolmogorov forward equations" and "Kolmogorov backward equations".[3]

Other authors, such as Motoo Kimura,[4] referred to the diffusion (Fokker–Planck) equation as the Kolmogorov forward equation, a name that has persisted.

The modern view

Continuous-time Markov chains

The original derivation of the equations by Kolmogorov starts with the Chapman–Kolmogorov equation (Kolmogorov called it the fundamental equation) for time-continuous and differentiable Markov processes on a finite, discrete state space.[2] In this formulation, it is assumed that the probabilities $P(x, s; y, t)$ are continuous and differentiable functions of $t > s$, where $x, y \in \Omega$ (the state space) and $t$ and $s$ are the final and initial times, respectively. Also, adequate limit properties for the derivatives are assumed. Feller derives the equations under slightly different conditions, starting with the concept of a purely discontinuous Markov process and then formulating them for more general state spaces.[5] Feller proves the existence of solutions of probabilistic character to the Kolmogorov forward equations and Kolmogorov backward equations under natural conditions.[5]

For the case of a countable state space we put $i, j$ in place of $x, y$. The Kolmogorov forward equations read

$$\frac{\partial P_{ij}}{\partial t}(s; t) = \sum_k P_{ik}(s; t)\, A_{kj}(t),$$

where $A(t)$ is the transition rate matrix (also known as the generator matrix),

while the Kolmogorov backward equations are

$$\frac{\partial P_{ij}}{\partial s}(s; t) = -\sum_k A_{ik}(s)\, P_{kj}(s; t).$$

The functions $P_{ij}(s; t)$ are continuous and differentiable in both time arguments. They represent the probability that the system that was in state $i$ at time $s$ jumps to state $j$ at some later time $t > s$. The continuous quantities satisfy

$$P_{ij}(s; t) \in [0, 1], \qquad \sum_j P_{ij}(s; t) = 1, \qquad \lim_{t \downarrow s} P_{ij}(s; t) = \delta_{ij}.$$
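When the chain is time-homogeneous, $A(t) \equiv A$ and the forward equations reduce to the matrix ODE $P'(t) = P(t)\,A$ with $P(0) = I$, solved by $P(t) = e^{tA}$. As an illustrative sketch (not part of the article; the two-state generator and all numeric values are made up for the example), the following Python snippet integrates the forward equations numerically and checks the result against the known closed form for a two-state chain:

```python
# Kolmogorov forward equations dP/dt = P A for a two-state
# continuous-time Markov chain with generator
#   A = [[-a,  a],
#        [ b, -b]]
# Closed form: P00(t) = b/(a+b) + a/(a+b) * exp(-(a+b)*t).
import math

def forward_step(P, A, h):
    """One 4th-order Runge-Kutta step of dP/dt = P A."""
    def mul(X, Y):  # matrix product X @ Y
        n = len(X)
        return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
                for i in range(n)]
    def axpy(c, X, Y):  # Y + c*X, elementwise
        n = len(X)
        return [[Y[i][j] + c * X[i][j] for j in range(n)] for i in range(n)]
    k1 = mul(P, A)
    k2 = mul(axpy(h / 2, k1, P), A)
    k3 = mul(axpy(h / 2, k2, P), A)
    k4 = mul(axpy(h, k3, P), A)
    out = P
    for c, k in ((h / 6, k1), (h / 3, k2), (h / 3, k3), (h / 6, k4)):
        out = axpy(c, k, out)
    return out

a, b, t, steps = 2.0, 1.0, 1.5, 1000       # illustrative rates and horizon
A = [[-a, a], [b, -b]]
P = [[1.0, 0.0], [0.0, 1.0]]               # P(0) = identity
h = t / steps
for _ in range(steps):
    P = forward_step(P, A, h)

exact = b / (a + b) + a / (a + b) * math.exp(-(a + b) * t)
assert abs(P[0][0] - exact) < 1e-8          # matches the closed form
assert all(abs(sum(row) - 1.0) < 1e-8 for row in P)  # rows stay stochastic
```

The row-sum assertion checks the conservation property $\sum_j P_{ij}(s;t) = 1$ stated above; it holds at every step because each row of the generator sums to zero.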

Relation with the generating function

Still in the discrete state case, letting $s = 0$ and assuming that the system initially is found in state $i$, the Kolmogorov forward equations describe an initial-value problem for finding the probabilities of the process, given the quantities $A_{jk}(t)$. We write $p_k(t) = P_{ik}(0; t)$, where $\sum_k p_k(t) = 1$; then

$$\frac{\mathrm{d}p_k(t)}{\mathrm{d}t} = \sum_j A_{jk}(t)\, p_j(t), \qquad p_k(0) = \delta_{ik}, \qquad k = 0, 1, 2, \ldots$$

For the case of a pure death process with constant rates the only nonzero coefficients are $A_{j, j-1} = \mu j$, $j \ge 1$. Letting

$$\Psi(x, t) = \sum_k x^k p_k(t),$$

the system of equations can in this case be recast as a partial differential equation for $\Psi(x, t)$ with initial condition $\Psi(x, 0) = x^i$. After some manipulations, the system of equations reads,[6]

$$\frac{\partial \Psi(x, t)}{\partial t} = \mu (1 - x)\, \frac{\partial \Psi(x, t)}{\partial x}, \qquad \Psi(x, 0) = x^i, \qquad \Psi(1, t) = 1.$$
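The pure death case can be checked numerically (a sketch; the rate, horizon, and initial population below are arbitrary). Solving the PDE by characteristics gives $\Psi(x,t) = \left(1 + (x-1)e^{-\mu t}\right)^i$, i.e. $N(t) \sim \mathrm{Binomial}(i, e^{-\mu t})$ — each of the $i$ initial individuals independently survives to time $t$ with probability $e^{-\mu t}$. Integrating the forward equations from $p_i(0) = 1$ should reproduce that binomial distribution:

```python
# Pure death process: A[j][j-1] = mu*j, so the forward equations are
#   dp_k/dt = mu*(k+1)*p_{k+1} - mu*k*p_k,   k = 0, ..., i.
import math

mu, i, t_end, steps = 0.7, 5, 2.0, 2000    # illustrative parameters

def rhs(p):
    """Right-hand side of the forward equations for the pure death process."""
    return [mu * (k + 1) * (p[k + 1] if k + 1 <= i else 0.0) - mu * k * p[k]
            for k in range(i + 1)]

p = [0.0] * (i + 1)
p[i] = 1.0                                  # start with i individuals
h = t_end / steps
for _ in range(steps):                      # classic RK4 integration
    k1 = rhs(p)
    k2 = rhs([p[j] + h / 2 * k1[j] for j in range(i + 1)])
    k3 = rhs([p[j] + h / 2 * k2[j] for j in range(i + 1)])
    k4 = rhs([p[j] + h * k3[j] for j in range(i + 1)])
    p = [p[j] + h / 6 * (k1[j] + 2 * k2[j] + 2 * k3[j] + k4[j])
         for j in range(i + 1)]

q = math.exp(-mu * t_end)                   # per-individual survival probability
binom = [math.comb(i, k) * q**k * (1 - q) ** (i - k) for k in range(i + 1)]
for pk, bk in zip(p, binom):                # numeric solution matches Binomial(i, q)
    assert abs(pk - bk) < 1e-8
```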

An example from biology

One example from biology is given below:[7]

$$\frac{\mathrm{d}p_n(t)}{\mathrm{d}t} = \beta (n - 1)\, p_{n-1}(t) - \beta n\, p_n(t)$$

This equation is applied to model population growth with birth. Here $n$ is the population index, with $n_0$ the initial population, $\beta$ is the birth rate, and $p_n(t) = \Pr(N(t) = n)$ is the probability of achieving a certain population size.

The analytical solution is:[7]

$$p_n(t) = e^{-\beta n t} \left[ p_n(0) + \beta (n - 1) \int_0^t e^{\beta n s}\, p_{n-1}(s)\, \mathrm{d}s \right]$$

This is a formula for the probability $p_n(t)$ in terms of the preceding ones, i.e. $p_{n-1}(t)$.
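The recursive structure of this solution can be checked numerically. The sketch below (parameters are arbitrary, and the closed form used for comparison is the standard Yule-process result, which is not stated in the article) builds each $p_n(t)$ from $p_{n-1}(t)$ by quadrature, starting from $p_{n_0}(t) = e^{-\beta n_0 t}$, the probability that no birth has yet occurred:

```python
# Pure birth (Yule) process: build p_n(t) level by level via
#   p_n(t) = beta*(n-1) * exp(-beta*n*t) * int_0^t exp(beta*n*s) p_{n-1}(s) ds
# (p_n(0) = 0 for n > n0), using the trapezoidal rule on a time grid.
import math

beta, n0, t_end, N = 0.5, 2, 1.5, 4000     # illustrative parameters
ts = [t_end * j / N for j in range(N + 1)]

# Base level: p_{n0}(t) = exp(-beta*n0*t).
p_prev = [math.exp(-beta * n0 * t) for t in ts]

def next_level(p_prev, n):
    """Compute p_n on the grid from p_{n-1} by the integrating-factor formula."""
    h = ts[1] - ts[0]
    integrand = [math.exp(beta * n * s) * q for s, q in zip(ts, p_prev)]
    out, acc = [0.0], 0.0                  # running trapezoidal integral
    for j in range(1, len(ts)):
        acc += h * (integrand[j - 1] + integrand[j]) / 2
        out.append(beta * (n - 1) * math.exp(-beta * n * ts[j]) * acc)
    return out

levels = {n0: p_prev}
for n in range(n0 + 1, n0 + 4):
    levels[n] = next_level(levels[n - 1], n)

# Compare with the standard Yule-process closed form at t = t_end:
#   p_n(t) = C(n-1, n-n0) * exp(-beta*n0*t) * (1 - exp(-beta*t))**(n-n0)
for n in range(n0, n0 + 4):
    closed = (math.comb(n - 1, n - n0) * math.exp(-beta * n0 * t_end)
              * (1 - math.exp(-beta * t_end)) ** (n - n0))
    assert abs(levels[n][-1] - closed) < 1e-5
```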

References

  1. ^ a b c Feller, W. (1949). "On the Theory of Stochastic Processes, with Particular Reference to Applications". Proceedings of the (First) Berkeley Symposium on Mathematical Statistics and Probability. Vol. 1. University of California Press. pp. 403–432.
  2. ^ a b Kolmogorov, Andrei (1931). "Über die analytischen Methoden in der Wahrscheinlichkeitsrechnung" [On Analytical Methods in the Theory of Probability]. Mathematische Annalen (in German). 104: 415–458. doi:10.1007/BF01457949. S2CID 119439925.
  3. ^ Feller, William (1957). "On Boundaries and Lateral Conditions for the Kolmogorov Differential Equations". Annals of Mathematics. 65 (3): 527–570. doi:10.2307/1970064. JSTOR 1970064.
  4. ^ Kimura, Motoo (1957). "Some Problems of Stochastic Processes in Genetics". Annals of Mathematical Statistics. 28 (4): 882–901. doi:10.1214/aoms/1177706791. JSTOR 2237051.
  5. ^ a b Feller, Willy (1940). "On the Integro-Differential Equations of Purely Discontinuous Markoff Processes". Transactions of the American Mathematical Society. 48 (3): 488–515. JSTOR 1990095.
  6. ^ Bailey, Norman T. J. (1990). The Elements of Stochastic Processes with Applications to the Natural Sciences. Wiley. p. 90. ISBN 0-471-52368-2.
  7. ^ a b Logan, J. David; Wolesensky, William R. (2009). Mathematical Methods in Biology. Pure and Applied Mathematics. John Wiley & Sons. pp. 325–327. ISBN 978-0-470-52587-6.