This redirect does not require a rating on Wikipedia's content assessment scale. It is of interest to the following WikiProjects:
Of both Markovian processes and non-Markovian processes. Please include simple, clear examples; scientifically or mathematically important examples; and examples that test the limits of the definition. —Preceding unsigned comment added by 72.194.126.236 (talk) 08:42, 2 February 2011 (UTC)
In addition to the above poster's comment: the existing example of a non-Markovian process (the one with the coins in the purse) is not very good. If X6 = 0.50, then the only possible way is to have one coin of 0.25 and five coins of 0.05; anything else would not give exactly 0.50 with six coins. So in that case the argument that knowing only X6 imparts less information than also knowing the history does not actually hold. — Preceding unsigned comment added by 85.76.128.218 (talk) 07:42, 15 July 2014 (UTC)
Isn't the example constructed so that the "state" is only the total value of the coins drawn so far? (I.e., the fact that there are six coins, and not some other number adding up to fifty cents, is not part of the state.) Perhaps there could be some connection to the subsequent remarks about Markov representations of non-Markov processes: if "state" means "total value of coins drawn so far," then this is a non-Markov process, but if "state" means "vector of counts of coin denominations drawn so far," then it is a Markov process. — Preceding unsigned comment added by 50.58.96.2 (talk) 18:41, 22 December 2014 (UTC)
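The distinction the two comments above circle around can be checked by brute force. A sketch in plain Python (the purse contents here, five quarters, five dimes and five nickels, are an assumption about the article's example): there are totals for which the "total value" state is compatible with more than one coin composition, and the compositions leave different purses behind, so the next-draw distribution is not determined by the total alone.

```python
from itertools import combinations
from collections import Counter, defaultdict

# Assumed purse: five quarters, five dimes, five nickels (in cents),
# drawn one at a time without replacement.
purse = [25] * 5 + [10] * 5 + [5] * 5

# Group every possible 5-coin draw by its total value.
by_total = defaultdict(set)
for draw in combinations(purse, 5):
    by_total[sum(draw)].add(tuple(sorted(draw)))

# A total of 45 cents after five draws is compatible with two
# different compositions:
print(sorted(by_total[45]))
# [(5, 5, 5, 5, 25), (5, 10, 10, 10, 10)]

# Each composition leaves a different purse, so the probability that
# the NEXT coin is a quarter differs -- the total alone is not a
# Markov state, while the vector of counts is (it determines the purse).
for comp in sorted(by_total[45]):
    left = Counter(purse) - Counter(comp)
    print(comp, left[25] / sum(left.values()))
# (5, 5, 5, 5, 25) 0.4
# (5, 10, 10, 10, 10) 0.5
```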
In the example section, shouldn't P be:
P = | 0.85 0.10 |
    | 0.15 0.90 |
instead of
P = | 0.85 0.15 |
    | 0.10 0.90 |
since each column should sum to 1? — Preceding unsigned comment added by 98.210.98.98 (talk) 16:40, 3 August 2014 (UTC)
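For what it's worth, both matrices are defensible; they are transposes of one another and encode the same chain under different conventions. A sketch in plain Python (states and numbers taken from the two matrices in the question) showing that the row-stochastic matrix pairs with right-multiplication by a row vector, and the column-stochastic one with left-multiplication of a column vector:

```python
# Row-stochastic convention (each ROW sums to 1), as in the article:
P_row = [[0.85, 0.15],
         [0.10, 0.90]]
# Column-stochastic convention (each COLUMN sums to 1), as proposed:
P_col = [[0.85, 0.10],
         [0.15, 0.90]]

# Sanity checks on the two conventions.
assert all(abs(sum(row) - 1.0) < 1e-12 for row in P_row)
assert all(abs(P_col[0][j] + P_col[1][j] - 1.0) < 1e-12 for j in range(2))

x = [1.0, 0.0]  # start in state 0

# Row convention: next distribution is x @ P_row.
next_row = [sum(x[i] * P_row[i][j] for i in range(2)) for j in range(2)]
# Column convention: next distribution is P_col @ x.
next_col = [sum(P_col[i][j] * x[j] for j in range(2)) for i in range(2)]

print(next_row, next_col)  # both give [0.85, 0.15]
```

Neither matrix is "wrong"; each simply pairs with its own update rule, and the article should state which convention it uses.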
I will just point out some inconsistencies and mistakes in this article:
"A Markov process is, as translations from Russian state, 'a process without after-effect.'" This sounds as if the word "Markov" translates to "without after-effect"; in fact, Markov is the name of a Russian mathematician. It should be made clear which original text the translation refers to.
"From the standard definitions of drift and diffusion coefficients this means that those coefficients may depend at most on a single point (x, t) and on no history at all." Some author seems to have mixed up the concepts of an Itô process and a Markov process: a general Markov process has no "drift and diffusion coefficients". The whole first paragraph is very confusing and seems to be more about Itô processes than about Markov processes.
The Markov property is stated twice with different definitions. -- 128.130.51.57 ( talk) 13:58, 26 March 2008 (UTC)
This article should be merged with the article Markov chain. As far as I can tell, this article doesn't actually say anything that the Markov chain article doesn't say. linas 15:56, 4 September 2005 (UTC)
I have clarified the second paragraph slightly. LS
It would be good to have an introductory sentence that introduces the subject to a layperson. For example, see [1]. Jim 18:51, 24 March 2006 (UTC)
For mathematicians, "Markov process" usually means continuous time, whereas "Markov chain" means discrete time. Markov processes are often used in chemistry and biology, and their properties are different from those of Markov chains; it would be nice to have a separate entry. Leboudec 16:24, 15 May 2006 (UTC)
A Markov chain has only a countable set of states; I thought "Markov process" might cover the continuous-valued version as well. (Cf. MathWorld [2]) Memming 13:15, 24 September 2006 (UTC)
Aren't there too many pages about the same thing? Markov chain, Markov property, continuous-time Markov process, etc. I propose the following:
1) Merge Markov process with Markov property; this will include the definitions for both discrete and continuous time.
2) Then link to Markov chain for properties specific to discrete time, and to continuous-time Markov process for properties specific to continuous time.
3) To the latter (continuous time) one should add geometric properties such as the Bakry–Émery condition. Convergence to equilibrium should also appear somewhere, maybe on the first page (Markov process?).
What do you think? Sodin 14:38, 1 March 2007 (UTC)
I agree that merging the content from Markov property into this page would be suitable. LachlanA 02:25, 30 July 2007 (UTC)
Yes, one of us has to get to work and clean up the Markov sections. I suggest one entry called Markov Processes. Under this heading we say that "chain" refers to discrete time (see Meyn & Tweedie 2005, [1]). Most of the sections will deal with discrete time, but one subsection can be titled "continuous time"; it will contain a link to a wiki diffusions entry.
Spmeyn ( talk) 15:25, 11 December 2007 (UTC)
Why not merge all pages but maintain them in separate sections on the Markov Chain page? —Preceding unsigned comment added by 128.200.138.240 ( talk) 18:17, 8 October 2007 (UTC)
In the back of my mind, I tend to draw this (possibly incorrect) mental model of a Markov process as some function on a measure-preserving dynamical system, but where the underlying time evolution is not made explicit (or is not explicitly known). I think my mental model is correct (?) ... but it occurs to me: are there *any* examples of Markov processes that cannot be formulated as functions on a measure-preserving dynamical system? If so, can examples of these be given? linas ( talk) 00:15, 10 April 2008 (UTC)
I found this article fairly messy and had a go at cleaning it up (I was not signed in—sorry); an expert in the area should validate that it's still correct. In particular I removed what appeared to be an extra formal definition; my understanding was that this extra definition was equivalent to the first. If the extra definition is needed for some reason, please explain the difference between the two. Ezrakilty ( talk) 21:05, 19 October 2008 (UTC)
Having now caught up on the competition, I support the merger of Markov process and Markov chain. The latter is much more detailed and clear, and seems to encompass all of the former. —Preceding unsigned comment added by Ezrakilty ( talk • contribs) 21:09, 19 October 2008 (UTC)
Is it true that given a Markovian process with probability density , one can take the Taylor expansion in time and, in a limiting procedure, get rid of the 2nd and higher-order terms, like so,
whereas, for a non-Markovian process, one cannot necessarily do this? In other words, do the increasingly higher-order time terms reach information about states in the increasingly distant past? (If so, this could be useful information to put in the Taylor series article.) Zeroparallax (talk) 07:05, 15 February 2009 (UTC)
I think that the current "formal definition" is a little weak. Given that the "state" is an uncountable set, with the current definition one may construct a process that is not Markovian but satisfies the condition. A better definition would be that is a Markov process if for each whenever we have that
Does anyone agree? —Preceding unsigned comment added by 86.176.241.179 ( talk) 10:16, 31 October 2009 (UTC)
The article mentions a certain case that is a second-order Markov process, but nowhere does it say what a first-order Markov process would be. If it is going to talk about "order" at all, it should define "order". DMJ001 (talk) 04:55, 11 August 2011 (UTC)
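On "order": a first-order Markov process is the usual case, where the next state depends only on the current one; a second-order process looks two steps back. The standard reduction turns any second-order chain into a first-order chain whose state is the pair of the last two symbols. A sketch with a made-up two-symbol rule (the 0.9 probability is an arbitrary assumption, not from the article):

```python
import random

# Hypothetical second-order rule on {0, 1}: the next symbol repeats
# the symbol from TWO steps back with probability 0.9, else flips it.
def next_symbol(prev2, prev1):
    return prev2 if random.random() < 0.9 else 1 - prev2

# First-order reduction: the state is the PAIR of the last two
# symbols, so the next state depends only on the current state.
def step(state):
    prev2, prev1 = state
    return (prev1, next_symbol(prev2, prev1))

random.seed(0)
state = (0, 1)
history = [state]
for _ in range(8):
    state = step(state)
    history.append(state)

# Consecutive pair-states always overlap in one symbol: the window
# slides forward by one step at a time.
assert all(a[1] == b[0] for a, b in zip(history, history[1:]))
print(history[-1])
```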
FWIW I put Gibbs measure into the see-also list, but really, a whole section should be devoted to this: all Markov processes are Gibbs measures! The way to see this is that the Markov condition means that the Markov process is time-invariant, i.e. that it has a 1-dimensional translational symmetry on the lattice of points in time. Symmetry implies a conserved quantity: what, in physics, would be called "energy". The corresponding Lagrange multiplier provides the mechanism for holding this constant, and this just means that any Markov process can be written as a Gibbs measure built from a partition function (mathematics).
I don't have any references at the tip of my finger that explain this, and am fain to devote time to this; however, I believe that hunting through the usual 'intro to ergodic theory' or 'intro to dynamical systems' will find one or another treatment of this in detail. Allow me a moment to be bombastic, and say that this really shouldn't be mysterious; this really should be stated as a "basic fact" about Markov processes. linas ( talk) 15:31, 17 July 2012 (UTC)
The article Markov property hints at the above by talking about the memorylessness of the exponential distribution, without explaining why. A very quick and dirty sketch would be to consider a probability space P and its Cartesian product, i.e. a lattice model. The Markov property says that the only interaction is between nearest neighbors. Thus the graph, aka Markov blanket, decomposes into a very simple form, and the only thing there is that goes between products and sums is the exponential, i.e. the Gibbs measure. For more complex Markov blankets, one uses a subshift of finite type to get the same. The real issue is that there are so many different kinds of notation in use, e.g. measures on cylinder sets for Cartesian products, etc., that it's not always clear that these different concepts are all really the same thing. A Rosetta stone translating between them, in one Markov article or another, would be ideal... linas (talk) 16:03, 17 July 2012 (UTC)
There is a move discussion in progress on Talk:Markov chain which affects this page. Please participate on that page and not in this talk page section. Thank you. — RMCD bot 11:30, 6 February 2017 (UTC)