Markov property

Let X(t) denote the state of the system at time t. For a future point of time t, the state X(t) is a random variable, and {X(t), t ≥ 0} is called a stochastic process with continuous time. Let H_s denote the "history" of the process up to time s. This history contains information about which states the system has visited from when it was put into operation at time 0 until time s. A stochastic process is said to be a Markov process if, for all s, t ≥ 0, all nonnegative integers i, j, and all possible histories H_s, we have that

Pr(X(t + s) = j | X(s) = i ∩ H_s) = Pr(X(t + s) = j | X(s) = i)
Markov property

If we consider a system that is in state i at time s, the probability that this system will be in state j at time t + s is independent of what happened to the system up to time s, that is, of the history H_s. This means that the process has no memory. We will further assume that the Markov process, for all i, j in the state space, fulfills

Pr(X(t + s) = j | X(s) = i) = Pr(X(t) = j | X(0) = i)   for all s, t ≥ 0

This means that the probability of a transition from state i to state j does not depend on the global time and only depends on the time interval available for the transition. A process with this property is known as a process with stationary transition probabilities, or as a time-homogeneous process.
Transition probability

The transition probabilities of the Markov process

P_ij(t) = Pr(X(t) = j | X(0) = i)

may be arranged as a matrix

         | P_00(t)  P_01(t)  ...  P_0r(t) |
P(t) =   | P_10(t)  P_11(t)  ...  P_1r(t) |
         |   ...      ...    ...    ...   |
         | P_r0(t)  P_r1(t)  ...  P_rr(t) |

When a process is in state i at time 0, it must either be in state i at time t or have made a transition to a different state. We must therefore have

Σ_{j=0}^{r} P_ij(t) = 1
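These properties can be illustrated numerically. The slides do not introduce the transition rate (generator) matrix, but if such a matrix A is assumed, the transition probability matrix of a time-homogeneous Markov process is P(t) = exp(At), and the row-sum property above can then be checked directly. The two-state rates below are made-up values for illustration only.

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical two-state example (states 0 and 1). The transition rate
# (generator) matrix A is an assumption for illustration; for a
# time-homogeneous Markov process, P(t) = exp(A t).
lam, mu = 0.1, 0.5                  # made-up rates: 0 -> 1 and 1 -> 0
A = np.array([[-lam,  lam],
              [  mu,  -mu]])

for t in (0.0, 1.0, 10.0):
    P_t = expm(A * t)               # P_ij(t) = Pr(X(t) = j | X(0) = i)
    print(f"t = {t}:\n{np.round(P_t, 4)}")
    # Each row must sum to 1: the process is in some state at time t.
    assert np.allclose(P_t.sum(axis=1), 1.0)
```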
Trajectory

[Figure: a possible trajectory of a Markov process, with the state plotted against time and transitions occurring at times S_1, S_2, S_3, S_4, S_5.]

Let 0 = S_0 ≤ S_1 ≤ S_2 ≤ ... be the times at which transitions occur, and let T_i = S_{i+1} − S_i be the ith interoccurrence time, or sojourn time, for i = 1, 2, .... A possible trajectory of a Markov process is illustrated above. We define S_i such that transition i takes place immediately before S_i, in which case the trajectory of the process is continuous from the right.
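A trajectory like the one sketched above can be simulated by drawing an exponentially distributed sojourn time in the current state and then jumping to a new state. The sketch below is illustrative only; the three-state rate matrix Q and the number of simulated transitions are assumptions, not part of the slides.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical three-state transition rate matrix (not from the slides):
# Q[i, j] is the rate of jumping from state i to state j (for i != j).
Q = np.array([[0.0, 0.4, 0.1],
              [0.3, 0.0, 0.2],
              [0.5, 0.5, 0.0]])

state = 0
S = [0.0]                 # S_0, S_1, S_2, ...: times at which transitions occur
visited = [state]         # state occupied after each transition
for _ in range(5):        # simulate five transitions
    exit_rate = Q[state].sum()
    sojourn = rng.exponential(1.0 / exit_rate)      # exponential time in state
    S.append(S[-1] + sojourn)
    # Jump to state j with probability Q[state, j] / exit_rate
    state = rng.choice(len(Q), p=Q[state] / exit_rate)
    visited.append(state)

print("Transition times S_i:", np.round(S, 2))
print("Visited states:", visited)
```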
Time in state

A Markov process enters state i at time 0, such that X(0) = i. Let T̃_i be the sojourn time in state i. [Note that T_i denotes the ith interoccurrence time, while T̃_i is the time spent during a visit to state i.] We want to find Pr(T̃_i > t). We observe that the process is still in state i at time s, that is, T̃_i > s, and are interested in finding the probability that it will remain in state i for t more time units. We hence want to find Pr(T̃_i > t + s | T̃_i > s). Since the process has the Markov property, the probability for the process to stay for t more time units is determined only by the current state i. The fact that the process has been staying there for s time units is therefore irrelevant. Thus

Pr(T̃_i > t + s | T̃_i > s) = Pr(T̃_i > t)   for s, t ≥ 0

Hence T̃_i is memoryless and must be exponentially distributed.
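The memoryless property stated above characterizes the exponential distribution: if T̃_i is exponential with rate λ, then Pr(T̃_i > t + s | T̃_i > s) = e^{−λ(t+s)} / e^{−λs} = e^{−λt} = Pr(T̃_i > t). A quick simulation-based sanity check (the rate and the time points below are arbitrary, illustrative choices) confirms this numerically.

```python
import numpy as np

rng = np.random.default_rng(1)
rate = 0.2                                           # arbitrary rate lambda
T = rng.exponential(1.0 / rate, size=1_000_000)      # simulated sojourn times

s, t = 3.0, 5.0
lhs = (T > t + s).sum() / (T > s).sum()   # Pr(T > t + s | T > s), estimated
rhs = (T > t).mean()                      # Pr(T > t), estimated
print(lhs, rhs, np.exp(-rate * t))        # all three are approximately equal
```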